Aspect-level sentiment analysis model based on alternating-attention mechanism and graph convolutional network
Xianfeng YANG, Yilei TANG, Ziqiang LI
Journal of Computer Applications    2024, 44 (4): 1058-1064.   DOI: 10.11772/j.issn.1001-9081.2023040497

Aspect-level sentiment analysis aims to predict the sentiment polarity of a specific target in a given text. To address the neglect of syntactic relationships between aspect words and their context, and to reduce the attention differences flattened by average pooling, an aspect-level sentiment analysis model based on an Alternating-Attention (AA) mechanism and Graph Convolutional Network (AA-GCN) was proposed. Firstly, a Bidirectional Long Short-Term Memory (Bi-LSTM) network was used to model the semantics of the context and the aspect words. Secondly, a GCN built on the syntactic dependency tree was used to learn positional information and dependencies, and the AA mechanism performed multi-level interactive learning to adaptively adjust the attention paid to the target words. Finally, the corrected aspect features and the context features were concatenated to form the final classification basis. Compared with the Target-Dependent Graph Attention Network (TD-GAT), the proposed model improved accuracy by 1.13%-2.67% on four public datasets and F1 score by 0.98%-4.89% on five public datasets, demonstrating the effectiveness of exploiting syntactic relationships and increasing attention to keywords.
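As a rough illustration of the pipeline this abstract describes, the PyTorch sketch below wires together a Bi-LSTM encoder, a dependency-tree GCN, and an alternating-attention loop. The layer sizes, the dot-product attention form, the degree normalization, and the number of alternating steps are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    """One graph convolution over a syntactic-dependency adjacency matrix."""
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, x, adj):
        # x: (batch, seq, dim); adj: (batch, seq, seq) with self-loops.
        degree = adj.sum(dim=-1, keepdim=True).clamp(min=1)
        return F.relu(self.linear(adj.bmm(x) / degree))

class AAGCN(nn.Module):
    """Sketch of AA-GCN: Bi-LSTM encoding, GCN over the dependency tree,
    then alternating attention between aspect and context representations."""
    def __init__(self, vocab_size, emb_dim=300, hid_dim=150,
                 num_classes=3, gcn_layers=2, aa_steps=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True,
                            bidirectional=True)
        self.gcn = nn.ModuleList(GCNLayer(2 * hid_dim)
                                 for _ in range(gcn_layers))
        self.aa_steps = aa_steps
        self.classifier = nn.Linear(4 * hid_dim, num_classes)

    @staticmethod
    def attend(query, keys):
        # Dot-product attention: pool `keys` by similarity to `query`.
        scores = torch.bmm(keys, query.unsqueeze(-1)).squeeze(-1)
        weights = F.softmax(scores, dim=-1)                  # (batch, seq)
        return torch.bmm(weights.unsqueeze(1), keys).squeeze(1)

    def forward(self, token_ids, aspect_mask, adj):
        # token_ids: (batch, seq); aspect_mask: 1 on aspect-word positions.
        h, _ = self.lstm(self.embed(token_ids))              # (batch, seq, 2h)
        g = h
        for layer in self.gcn:                               # syntax-aware states
            g = layer(g, adj)
        mask = aspect_mask.unsqueeze(-1).float()
        aspect = (g * mask).sum(1) / mask.sum(1).clamp(min=1)
        context = h.mean(dim=1)
        for _ in range(self.aa_steps):                       # alternating attention
            context = self.attend(aspect, h)
            aspect = self.attend(context, g)
        return self.classifier(torch.cat([aspect, context], dim=-1))
```

In this sketch, the aspect and context vectors re-attend to each other's hidden states for a fixed number of rounds, which stands in for the paper's multi-level interactive learning; the adjacency matrix adj would come from an external dependency parser.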

Text classification model combining word annotations
Xianfeng YANG, Jiahe ZHAO, Ziqiang LI
Journal of Computer Applications    2022, 42 (5): 1317-1323.   DOI: 10.11772/j.issn.1001-9081.2021030489

Traditional text feature representation methods cannot fully resolve word polysemy. To address this problem, a new text classification model combining word annotations was proposed. Firstly, using an existing Chinese dictionary, the dictionary annotation of each word, selected according to its context, was obtained and encoded with Bidirectional Encoder Representations from Transformers (BERT) to generate annotation sentence vectors. Then, the annotation sentence vectors were fused with the word embedding vectors as the input layer, enriching the feature information of the input text. Finally, a Bidirectional Gated Recurrent Unit (BiGRU) was used to learn the features of the input text, and an attention mechanism was introduced to highlight the key feature vectors. Experimental results on the public THUCNews dataset and the Sina Weibo sentiment classification dataset show that text classification models combining BERT word annotations significantly outperform their counterparts without word annotations; the proposed BERT word annotation_BiGRU_Attention model achieved the highest precision and recall among all compared models, with F1 scores, reflecting overall performance, of up to 98.16% and 96.52% respectively.
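A minimal PyTorch sketch of the described input fusion and BiGRU-attention classifier follows. The annotation vectors are taken as a precomputed input tensor to keep the sketch self-contained (in the paper they come from BERT encodings of each word's dictionary annotation), and the additive-attention form, dimensions, and names are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AnnotatedBiGRUAttention(nn.Module):
    """Sketch of a BiGRU-with-attention classifier whose input fuses word
    embeddings with BERT-encoded dictionary-annotation vectors."""
    def __init__(self, vocab_size, emb_dim=300, anno_dim=768,
                 hid_dim=128, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.gru = nn.GRU(emb_dim + anno_dim, hid_dim, batch_first=True,
                          bidirectional=True)
        self.att_proj = nn.Linear(2 * hid_dim, 2 * hid_dim)
        self.att_query = nn.Parameter(torch.randn(2 * hid_dim))
        self.classifier = nn.Linear(2 * hid_dim, num_classes)

    def forward(self, token_ids, anno_vecs):
        # token_ids: (batch, seq); anno_vecs: (batch, seq, anno_dim), e.g.
        # the BERT encoding of each word's context-selected dictionary gloss.
        x = torch.cat([self.embed(token_ids), anno_vecs], dim=-1)
        h, _ = self.gru(x)                                   # (batch, seq, 2h)
        # Additive attention: score each time step against a learned query.
        u = torch.tanh(self.att_proj(h))
        weights = F.softmax(torch.matmul(u, self.att_query), dim=-1)
        pooled = torch.bmm(weights.unsqueeze(1), h).squeeze(1)
        return self.classifier(pooled)
```

In a full pipeline, anno_vecs could be obtained by running a pretrained Chinese BERT (e.g. Hugging Face's BertModel) over each word's dictionary gloss and pooling its output, then aligning the resulting vectors with the token sequence.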
